#AAAI2025 workshops round-up 2: Open-source AI for mainstream use, and federated learning for unbounded and intelligent decentralization
In this series of articles, we're publishing summaries of some of the key takeaways from a few of the workshops held at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025). The first-ever workshop on "Open Source AI for Mainstream Use" was held on March 4, 2025 at the Pennsylvania Convention Center in Philadelphia. The goal of this workshop was to bring researchers and practitioners together in a single forum to discuss topics at the intersection of AI and open source and to demonstrate relevant technology. Overall, the participants appreciated the interdisciplinary nature of the workshop and are looking forward to repeating it next year. The first edition of the FLUID workshop focused on emerging challenges and opportunities in federated learning and intelligent decentralization, bringing together a growing international community of researchers working across optimization, privacy, scalability, and the practical deployment of decentralized learning systems.
We finally have a definition for open-source AI
Ayah Bdeir, a senior advisor to Mozilla and a participant in OSI's process, says certain parts of the open-source definition were relatively easy to agree upon, including the need to reveal model weights (the parameters that help determine how an AI model generates an output). Other parts of the deliberations were more contentious, particularly the question of how public training data should be. The lack of transparency about where training data comes from has led to innumerable lawsuits against big AI companies, from makers of large language models like OpenAI to music generators like Suno, which do not disclose much about their training sets beyond saying they contain "publicly accessible information." Ultimately, the new definition requires that open-source models provide information about the training data to the extent that "a skilled person can recreate a substantially equivalent system using the same or similar data." It's not a blanket requirement to share all training data sets, but it also goes further than what many proprietary models or even ostensibly open-source models do today. "Insisting on an ideologically pristine kind of gold standard that actually will not effectively be met by anybody ends up backfiring," Bdeir says.
Mark Zuckerberg Just Intensified the Battle for AI's Future
The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers? On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility. At the heart of Meta's announcement on Tuesday was the release of its latest generation of Llama large language models, the company's answer to ChatGPT.
Why Chinese companies are betting on open-source AI
The good news is it's actually not that hard! I recently dug around and realized that many Chinese AI models are much more accessible overseas than I expected. You can access the majority of them either by registering accounts on their websites or using popular open-source AI platforms like Hugging Face. So I published this practical guide today that lists a dozen of the top Chinese LLM chatbots you can use and the methods to easily access them in minutes, from anywhere in the world. During my experiments with these models, one thing soon became clear: While most Chinese AI companies have set a higher bar for access to their products than their Western counterparts, a trend toward open-sourcing AI models is making them ever more accessible to an overseas audience.
The tech industry can't agree on what open-source AI means. That's a problem.
But there's a fundamental problem: no one can agree on what "open-source AI" means. On the face of it, open-source AI promises a future where anyone can take part in the technology's development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what, exactly, is it? What makes an AI model open source, and what disqualifies it?
EU urged to protect grassroots AI research or risk losing out to US
The EU has been warned that it risks handing control of artificial intelligence to US tech firms if it does not act to protect grassroots research in its forthcoming AI bill. In an open letter coordinated by the German research group Laion, or Large-scale AI Open Network, the European parliament was told that "one-size-fits-all" rules risked eliminating open research and development. "Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe," which would "entrench large firms" and "hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas", the letter says. It adds: "Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."
Why the EU's Artificial Intelligence Act could harm innovation
The EU's proposed Artificial Intelligence Act plans to restrict open-source AI. The proposed, and still debated, Artificial Intelligence Act (AIA) from the EU touches on the regulation of open-source AI. But enforcing strict restrictions on the sharing and distribution of open-source general-purpose AI (GPAI) is a completely retrograde step, akin to rewinding the world back by 30 years. Open-source culture is a major reason why humanity has been able to advance technology at such speed. Only recently have AI researchers embraced sharing their code for greater transparency and verification; putting constraints on this movement will damage the cultural progress the scientific community has made.
The Implications of Open-Source AI: Should You Release Your AI Source Code Publicly?
I am VP of Product Delivery at Reface, an AI app for swapping faces in videos, GIFs, and images. As humanity continues to adopt new AI solutions across many industries, the usefulness and ethics of these projects are still often debated. Experts emphasize that the developers of any new AI tool must vet it for beneficial use, be responsible for its safety, and consider its potential for harm. Still, the question of who should control the creation and use of new AI technologies, specifically synthetic media and deepfake tools, remains open. A second question follows from it: is there a risk in making the code of new tools available to the general public?